- OK, I found a number of short explanations on the Net regarding this, but I
- wanted to construct a longer one.
-
- IRIX 5.3 introduces new facilities for synchronization of audio and other media.
-
- In the Audio Library, the two new functions which correspond to these features
- are:
- ALgetframetime
- ALgetframenumber
-
- You can find simplistic example code synchronizing audio & MIDI in the 4Dgifts
- directory of your SGI:
- ~4Dgifts/examples/midi/syncrecord
- This requires Archer Sully's little Midi File Library, which is also included
- in 4Dgifts.
-
- Synchronizing audio & other media really means synchronizing the media
- at the jacks on the back of the machine. This implies that we need
- some method of determining exactly when media data came in an input jack
- and scheduling exactly when it will reach an output jack.
-
- First, some definitions.
-
- Unadjusted System Time ("UST")
- ------------------------------
-
- This is a shared timeline between all the media. It is called "unadjusted"
- because it is never adjusted by "timed" or other agents. UST is a
- high-resolution clock returned as a 64-bit number of nanoseconds. The
- resolution of the actual clock used for UST will vary, but it is always
- finer than 1 microsecond.
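-
- Here's a minimal sketch that just reads UST. I'm assuming the dmGetUST()
- call from libdmedia (dmedia/dmedia.h) is available on your release --
- check the header, since the AL calls described below may be all you
- actually need:
-
-     #include <stdio.h>
-     #include <dmedia/dmedia.h>     /* dmGetUST() -- link with -ldmedia */
-
-     int main(void)
-     {
-         unsigned long long ust;    /* UST: 64-bit count of nanoseconds */
-
-         if (dmGetUST(&ust) != 0) { /* 0 (DM_SUCCESS) expected on success */
-             fprintf(stderr, "couldn't read UST\n");
-             return 1;
-         }
-         printf("UST is now %llu ns\n", ust);
-         return 0;
-     }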
-
- Sample Frame Counters
- ---------------------
- We need a way to reference a particular sample frame in an input
- or output stream. So we define a "sample frame counter" at each
- input or output device. This just counts all the sample frames that
- go by, starting with 0, and incrementing until the system is rebooted.
- I should emphasize that the sample frame counter is defined at the
- *device*, not the *port* level. In other words, if my system's been
- up for a while, and I start a new application which opens an audio port,
- the first sample frame number coming into the port will in general NOT
- be 0. It's likely to be a very large number, since I'm picking up the
- audio from the input device in mid-stream.
-
- Because sample frame numbers are referenced to the device, two different
- input applications running simultaneously will have a shared timeline. If
- each gets a sample numbered "10000," it will be the same sample in each
- application. Sample-frame numbering is sample-accurate.
-
- Similarly, because sample frame numbers are referenced to the device,
- two different output applications running simultaneously will have a
- shared timeline (because they share the same device). If each
- application puts out a sample at frame "10000," the two samples will
- arrive at the output device simultaneously and be mixed together. I've
- actually written an application which puts out a sine wave 180 degrees
- out of phase with the sine wave from another application, and the
- outputs of the two applications cancel exactly.
-
- ALgetframenumber will return the sample-frame number, relative to the
- device, of the next frame to be read from or written to a port. Note that
- while a port is not overflowing or underflowing, this number stays
- constant. This is because the port is a queue; on an input port, the
- sample frame number of the queue head stays constant while samples pile
- up behind it. On an output port, the sample frame number of the queue
- tail stays constant while samples drain out the other end to the audio
- device. However, if you're underflowing or overflowing, this number
- will be changing (and you can't synchronize until you get out of underflow
- or overflow).
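-
- Here's a minimal sketch of reading that number for an output port. I'm
- assuming ALgetframenumber() takes the port plus a pointer to a 64-bit
- unsigned value and returns a negative value on error -- verify the
- prototype against audio.h on your system:
-
-     #include <stdio.h>
-     #include <audio.h>        /* IRIX Audio Library -- link with -laudio */
-
-     int main(void)
-     {
-         ALport port;
-         unsigned long long frame;
-
-         /* 0 config means "use the default port configuration". */
-         port = ALopenport("framedemo", "w", 0);
-         if (port == NULL) {
-             fprintf(stderr, "couldn't open an audio output port\n");
-             return 1;
-         }
-
-         /* Device-relative number of the next frame we would write into
-          * this port's queue; constant as long as we aren't underflowing. */
-         if (ALgetframenumber(port, &frame) < 0)
-             fprintf(stderr, "ALgetframenumber failed\n");
-         else
-             printf("next output frame number: %llu\n", frame);
-
-         ALcloseport(port);
-         return 0;
-     }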
-
- If I want to output audio at a particular sample frame, I can just
- write 0's to the port until I get to that sample frame. Similarly, if I
- want to read a particular sample frame from input, I can drop the input
- until I get there.
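-
- Here's a sketch of that zero-padding idea for an output port. It assumes
- a 16-bit stereo port (two short samples per frame), the ALgetframenumber()
- prototype above, and a target frame number 'when' computed elsewhere:
-
-     #include <audio.h>
-
-     /* Write silence into 'port' until the next frame to be written is
-      * frame number 'when'; the caller then writes its real audio. */
-     static void pad_until(ALport port, unsigned long long when)
-     {
-         static short silence[2 * 1000];    /* 1000 frames of zeros */
-         unsigned long long next;
-
-         if (ALgetframenumber(port, &next) < 0)
-             return;
-
-         while (next < when) {
-             unsigned long long todo = when - next;
-             if (todo > 1000)
-                 todo = 1000;
-             ALwritesamps(port, silence, (long)(todo * 2));  /* samples, not frames */
-             next += todo;
-         }
-     }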
-
- Time Synchronization
- --------------------
- Here's how to start and stop things at the same time, and how to make
- precise temporal calculations relative to audio.
-
- OK, so now we know how to determine which sample-frame number we're reading
- or writing. How can we relate the sample-frame numbers to the time at which
- a sample frame came in or will go out?
-
- The ALgetframetime call returns an atomic pair of (UST, sample frame). The
- pair is very carefully defined: the UST is the time at which the given sample
- frame came into the machine or will go out of the machine. The software will
- compensate for all the group delays through the A/D or D/A converters, if
- necessary.
-
- Note that you do not have any control over which pair is returned by
- ALgetframetime. It is merely guaranteed to return a "recent" pair. So you
- have to do a tiny bit of algebra to figure out what you want.
-
- Here's an example.
-
- I want to know at what time (UST) the next sample I write will go out the jack.
- I call ALgetframetime() to get some (UST, sample frame) pair. Suppose it
- returns (t, 10000). So I know that sample frame 10000 hit the output jack at
- time t. Now I call ALgetframenumber() to determine what the sample frame number
- is for the next sample frame to be written. It tells me 11000 (because in
- general the sample I'm writing into the port is newer than the sample going
- out the jack).
-
- So I can now determine the time at which sample frame 11000 goes out. It's just
- t + (11000-10000)*(#nanoseconds/sample)
- The number of nanoseconds per sample can be precomputed from the sample rate.
-
- Note that the equation is exactly the same for an input port -- provided
- you use signed arithmetic for the sample-frame numbers, since on an input
- port the next frame to be read from the queue is older than the frame in
- the returned pair, so the difference is negative.
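-
- In code, the calculation looks something like the sketch below. I'm
- assuming ALgetframetime() fills in the frame number first and the UST
- second, and that both calls use 64-bit unsigned values -- check the
- argument order against the man page before relying on it:
-
-     #include <audio.h>
-
-     /* UST (nanoseconds) at which the next frame written to 'port' will
-      * reach the output jack, given the port's sample rate in Hz.
-      * Returns 0 if either AL call fails. */
-     static unsigned long long next_frame_ust(ALport port, double rate)
-     {
-         unsigned long long pair_frame, pair_ust;  /* the "recent" pair */
-         unsigned long long next;                  /* next frame we'll write */
-         double ns_per_frame = 1.0e9 / rate;
-         long long delta;
-
-         if (ALgetframetime(port, &pair_frame, &pair_ust) < 0 ||
-             ALgetframenumber(port, &next) < 0)
-             return 0;
-
-         /* t + (next - pair_frame) * nanoseconds-per-frame; keep the
-          * difference signed so the same code works for an input port. */
-         delta = (long long)next - (long long)pair_frame;
-         return pair_ust + (long long)((double)delta * ns_per_frame);
-     }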
-
- Here's a little more complicated example.
-
- Suppose I want my audio to start at the same time as a particular video frame.
-
- I determine the UST t at which that video frame is to go out (not hard given
- the frame rate and a UST/frame-number pair from video).
-
- Then I calculate the audio sample frame N which corresponds to that UST
- (t): I call ALgetframetime and do the algebra using the pair I get back
- and my knowledge of the sample rate.
-
- Now I determine the next sample frame N0 in my port, using ALgetframenumber.
- If N0 >= N, I've already missed the video frame. If N0 < N, I write
- (N-N0) sample frames of 0's into the port, followed by my audio.
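-
- Here's a sketch of that calculation, reusing pad_until() from earlier.
- 'video_ust' is the UST at which the video frame goes out, obtained
- however you track video; the AL prototypes and argument order are the
- same assumptions as before:
-
-     #include <audio.h>
-
-     /* Arrange for the next real audio written to 'port' to hit the jack
-      * at UST 'video_ust' (nanoseconds).  Returns the target frame number
-      * N, or 0 if the deadline has already passed. */
-     static unsigned long long
-     start_at_ust(ALport port, double rate, unsigned long long video_ust)
-     {
-         unsigned long long pair_frame, pair_ust, next;
-         double ns_per_frame = 1.0e9 / rate;
-         long long target;
-
-         if (ALgetframetime(port, &pair_frame, &pair_ust) < 0 ||
-             ALgetframenumber(port, &next) < 0)
-             return 0;
-
-         /* Audio sample frame N corresponding to UST video_ust. */
-         target = (long long)pair_frame + (long long)
-             ((double)((long long)video_ust - (long long)pair_ust) / ns_per_frame);
-
-         if ((long long)next >= target)
-             return 0;                         /* N0 >= N: missed the frame */
-
-         pad_until(port, (unsigned long long)target);  /* (N - N0) zero frames */
-         return (unsigned long long)target;
-     }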
-
- For MIDI, I can determine the UST of an input message, and schedule an output
- message using UST, so the concepts are similar.
-
- Rate Synchronization
- --------------------
-
- We still have the problem of drift: once I start things in sync, the rates of
- the different media streams will be slightly different, so the media will
- drift.
-
- A common way to solve this is to actually lock the clock of the audio
- device to that of the video device (or other audio device). You can run
- the A/D, AES transmitter, and D/A off of the recovered clock from the
- AES digital input (use the "digital" rate).
-
- To slave one audio device to another, you just run the AES output of
- the master into the AES input of the slave, and run the slave off of
- the recovered clock from its AES input.
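-
- Slaving a device to the recovered AES clock should just be a global
- parameter change through ALsetparams. The AL_RATE_AES_1 token is my
- assumption for "use the digital input rate" -- check audio.h and the
- ALsetparams man page for the exact names on your hardware:
-
-     #include <audio.h>
-
-     /* Sketch: run the input and output sample clocks off the clock
-      * recovered from the AES digital input.  AL_RATE_AES_1 is assumed;
-      * verify the token name in audio.h for your machine. */
-     static void slave_to_aes_input(void)
-     {
-         long pv[4];
-
-         pv[0] = AL_INPUT_RATE;    pv[1] = AL_RATE_AES_1;
-         pv[2] = AL_OUTPUT_RATE;   pv[3] = AL_RATE_AES_1;
-         ALsetparams(AL_DEFAULT_DEVICE, pv, 4);
-     }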
-
- You can generate an AES clock from a video stream using an external
- box such as the TimeLine MicroLynx (TimeLine Vista, Inc: (619) 727-3300).
-
- Summary
- -------
-
- Hopefully this will get you started. I have some code examples which I'll
- include here soon.
-